optimistic optimization
Optimistic Optimization of a Deterministic Function without the Knowledge of its Smoothness
We consider a global optimization problem of a deterministic function f in a semi-metric space, given a finite budget of n evaluations. The function f is assumed to be locally smooth (around one of its global maxima) with respect to a semi-metric l. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm, DOO, that requires the knowledge of l. We report a finite-sample performance bound in terms of a measure of the quantity of near-optimal states. We then define a second algorithm, SOO, which does not require the knowledge of the semi-metric l under which f is smooth, and whose performance is almost as good as that of an optimally fitted DOO.
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- North America > United States > New York (0.04)
- North America > United States > North Carolina (0.04)
- (2 more...)
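The DOO procedure from the abstract above can be sketched in a few lines: partition the input space hierarchically, score each leaf cell optimistically by its center value plus a bound delta(depth) derived from the semi-metric, and always expand the leaf with the highest score. This is a minimal illustrative sketch on [0, 1], not the authors' implementation; the ternary split (K = 3) and the specific delta are assumptions made for the example.

```python
def doo(f, delta, n_evals, K=3):
    """Sketch of Deterministic Optimistic Optimization on [0, 1].

    f      : objective to maximize
    delta  : delta(depth) bounds f(x*) - f(center) within any cell at
             that depth (the semi-metric knowledge DOO requires)
    n_evals: evaluation budget
    K      : number of children per split (assumed ternary here)
    """
    # Each leaf is (b_value, low, high, depth, f_center).
    mid = 0.5
    f_mid = f(mid)
    leaves = [(f_mid + delta(0), 0.0, 1.0, 0, f_mid)]
    best_x, best_f = mid, f_mid
    evals = 1
    while evals + K <= n_evals:
        # Optimistic rule: expand the leaf with the highest upper bound.
        i = max(range(len(leaves)), key=lambda j: leaves[j][0])
        _, lo, hi, d, _ = leaves.pop(i)
        w = (hi - lo) / K
        for k in range(K):
            c_lo, c_hi = lo + k * w, lo + (k + 1) * w
            c_mid = (c_lo + c_hi) / 2
            fc = f(c_mid)
            evals += 1
            if fc > best_f:
                best_x, best_f = c_mid, fc
            leaves.append((fc + delta(d + 1), c_lo, c_hi, d + 1, fc))
    return best_x, best_f
```

For a 1-Lipschitz objective and ternary splits, delta(d) = 3**(-d) is a valid bound on the suboptimality of a cell center at depth d, so the search provably concentrates on cells that may still contain a global maximum.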
Path-aware optimistic optimization for a mobile robot
Santejudean, Tudor, Busoniu, Lucian
We consider problems in which a mobile robot samples an unknown function defined over its operating space, so as to find a global optimum of this function. The path traveled by the robot matters, since it influences energy and time requirements. We consider a branch-and-bound algorithm called deterministic optimistic optimization, and extend it to the path-aware setting, obtaining path-aware optimistic optimization (OOPA). In this new algorithm, the robot decides how to move next via an optimal control problem that maximizes the long-term impact of the robot trajectory on lowering the upper bound, weighted by bound and function values to focus the search on the optima. An online version of value iteration is used to solve an approximate version of this optimal control problem. OOPA is evaluated in extensive experiments in two dimensions, where it does better than path-unaware and local-optimization baselines.
- North America > United States (0.04)
- Europe > Romania > Nord-Vest Development Region > Cluj County > Cluj-Napoca (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.89)
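The path-aware idea can be caricatured by a one-step greedy rule under a Lipschitz smoothness assumption: among reachable candidate positions, move toward the one where the current upper bound on the unknown function is largest, since sampling there lowers the bound the most. This is a deliberate simplification; the OOPA algorithm in the abstract instead solves a long-horizon optimal control problem with an online version of value iteration, which this sketch does not reproduce.

```python
def greedy_path_aware_step(samples, L, candidates):
    """One-step greedy caricature of path-aware optimistic search.

    samples    : dict {position: observed function value}
    L          : assumed Lipschitz constant of the unknown function
    candidates : positions the robot can reach in one step
    Returns the candidate where the Lipschitz upper bound is largest.
    """
    def upper_bound(x):
        # Lipschitz upper envelope implied by all samples so far.
        return min(fv + L * abs(x - xv) for xv, fv in samples.items())

    return max(candidates, key=upper_bound)
```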
Optimistic Optimization of Gaussian Process Samples
Grosse, Julia, Zhang, Cheng, Hennig, Philipp
Bayesian optimization is a popular formalism for global optimization, but its computational costs limit it to expensive-to-evaluate functions. A competing, computationally more efficient global optimization framework is optimistic optimization, which exploits prior knowledge about the geometry of the search space in the form of a dissimilarity function. We investigate to what degree the conceptual advantages of Bayesian optimization can be combined with the computational efficiency of optimistic optimization. By mapping the kernel to a dissimilarity, we obtain an optimistic optimization algorithm for the Bayesian optimization setting with a run-time of up to $\mathcal{O}(N \log N)$. As a high-level takeaway we find that, when using stationary kernels on objectives of relatively low evaluation cost, optimistic optimization can be strongly preferable to Bayesian optimization, while for strongly coupled and parametric models, good implementations of Bayesian optimization can perform much better, even at low evaluation cost. We argue that there is a new research domain between geometric and probabilistic search, i.e., methods that run drastically faster than traditional Bayesian optimization while retaining some of its crucial functionality.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- North America > Panama (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
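One standard way to turn a kernel into a dissimilarity, in the spirit of the abstract above, is the canonical Gaussian-process pseudo-metric d(x, y) = sqrt(k(x,x) - 2 k(x,y) + k(y,y)), i.e. the standard deviation of f(x) - f(y) under the prior. Whether this is the exact mapping the authors use is an assumption of this sketch; the RBF kernel and its length scale are likewise illustrative.

```python
import math

def rbf(x, y, ls=0.1):
    """Squared-exponential (RBF) kernel on the real line."""
    return math.exp(-(x - y) ** 2 / (2 * ls ** 2))

def canonical_dissimilarity(x, y, k=rbf):
    """Canonical GP pseudo-metric: sqrt(Var[f(x) - f(y)]) under the prior.

    For a stationary kernel this grows with |x - y|, so it can play the
    role of the dissimilarity that optimistic optimization expects.
    """
    return math.sqrt(max(0.0, k(x, x) - 2 * k(x, y) + k(y, y)))
```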
Optimistic optimization of a Brownian
Grill, Jean-Bastien, Valko, Michal, Munos, Rémi
We address the problem of optimizing a Brownian motion. We consider a (random) realization $W$ of a Brownian motion with input space in $[0,1]$. Given $W$, our goal is to return an $\epsilon$-approximation of its maximum using the smallest possible number of function evaluations, the sample complexity of the algorithm. We provide an algorithm with sample complexity of order $\log^2(1/\epsilon)$. This improves over previous results of Al-Mharmah and Calvin (1996) and Calvin et al. (2017) which provided only polynomial rates. Our algorithm is adaptive---each query depends on previous values---and is an instance of the optimism-in-the-face-of-uncertainty principle.
- North America > Canada > Quebec > Montreal (0.04)
- Europe > Germany > Saxony-Anhalt > Magdeburg (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Europe > France > Hauts-de-France > Pas-de-Calais (0.04)
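The optimism-in-the-face-of-uncertainty principle from the abstract can be caricatured as a bisection scheme: keep an optimistic upper bound of the form max(W(a), W(b)) + c*sqrt(b - a) for each unexplored interval, and always sample the midpoint of the interval with the largest bound. The constant c and the exact form of the bound are illustrative assumptions; the paper's algorithm uses bounds tailored to the Brownian bridge to achieve the log^2(1/epsilon) sample complexity. The sketch below also simulates the Brownian motion itself on demand, via the standard Brownian-bridge midpoint rule.

```python
import heapq
import math
import random

def optimistic_brownian_max(n_evals, c=2.0, seed=0):
    """Optimistic bisection sketch for maximizing a Brownian path on [0, 1].

    The path is simulated lazily: the midpoint of an interval with
    endpoint values (wa, wb) is Gaussian with mean (wa + wb)/2 and
    variance (b - a)/4 (Brownian-bridge midpoint rule).
    """
    rng = random.Random(seed)
    w0, w1 = 0.0, rng.gauss(0.0, 1.0)   # W(0) = 0, W(1) ~ N(0, 1)
    best = max(w0, w1)
    # Min-heap keyed on the negated optimistic upper bound.
    heap = [(-(max(w0, w1) + c * math.sqrt(1.0)), 0.0, 1.0, w0, w1)]
    evals = 2
    while evals < n_evals and heap:
        _, a, b, wa, wb = heapq.heappop(heap)
        m = (a + b) / 2
        wm = rng.gauss((wa + wb) / 2, math.sqrt((b - a) / 4))
        evals += 1
        best = max(best, wm)
        for lo, hi, wl, wh in ((a, m, wa, wm), (m, b, wm, wb)):
            ub = max(wl, wh) + c * math.sqrt(hi - lo)
            heapq.heappush(heap, (-ub, lo, hi, wl, wh))
    return best
```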